Applying the Publication Power Approach to Artificial Intelligence Journals

Author

  • Lior Rokach
Abstract

This study evaluates the utility of the Publication Power Approach (PPA) for assessing the quality of journals in the field of artificial intelligence. PPA is compared with the Thomson-Reuters Institute for Scientific Information (TR) five-year and two-year impact factors and with expert opinion. The ranking produced by the method under study is only partially correlated with citation-based measures (TR) but exhibits close agreement with expert survey rankings. A simple average of the TR and power rankings results in a new ranking that is highly correlated with the expert survey rankings. This evidence suggests that power ranking can contribute to evaluating AI journals.

Introduction and Related Work

Because journals serve as the main outlets for publishing scientific research, it is not surprising that one of the most widely studied problems in scientometrics is determining the merit of academic journals and ranking them accordingly. Although journal ranking helps academic libraries to select journals, it is often, and more importantly, used as a measure of research quality. For example, the Israel Higher Education Planning and Budgeting Committee (VATAT) financially rewards universities for publishing in top-tier journals, and many university administrators around the world evaluate their scholars according to their publications as part of the tenure, promotion, and reward process. Given a journal's ranking, researchers can target their papers to top-ranked journals and improve their chances for promotion. The four common approaches for generating journal rankings are based on opinion surveys, citations, authors' affiliations, and publishing behavior. In expert opinion surveys, a number of scholars rank each journal according to a predefined set of criteria. The results reflect the cumulative peer opinion of a representative group of experts within a particular discipline or field.
However, expert surveys have also been criticized for their subjectivity, the lack of clarity of their rating criteria (Holsapple, 2008), and various biases (such as preferring outlets that publish more articles per year; Serenko & Dohan, 2011). Finally, establishing a valid expert survey that includes a sufficiently large number of qualified respondents can be time-consuming. Many citation-based measures have been suggested for ranking journals, including impact factors (Garfield, 2006), the eigenfactor (Bergstrom, 2007), and the h-index and its variants (Harzing et al., 2007). The main advantage of these measures is their objectivity; however, they have also been criticized, with some claiming that a few highly cited papers skew the citation distribution (Calver & Bradley, 2009) or that not all citations have the same significance (Holsapple, 2008). Moreover, because citation patterns vary across disciplines, it is very difficult to evaluate multidisciplinary journals. Research shows that using citation-based measures tends to generate journal rankings that are only weakly correlated with expert surveys (see, for instance, Schloegl & Stock, 2004, and Serenko & Dohan, 2011 for a complete list). Even when a strong correlation can be found, there are still considerable differences in the ranking of certain journals (Serenko & Dohan, 2011). A relatively new approach to ranking is based on the author's university affiliation. The underlying premise is that tenured faculty members of prominent research universities tend to publish their work in premier journals. The Author Affiliation Index (AAI) of a journal (or set of journals) is defined as the percentage of authors who publish in that journal (or set of journals) and are affiliated with a predetermined group of top-rated universities (or university departments) in the domain under study (Harless & Reilly, 1998; Cronin & Meho, 2008; Agrawal, 2011). However, author-based methods have drawbacks.
The first limitation is the need to select a set of leading affiliations. If the set is too narrowly defined, then it might not be sufficient to rank journals reliably (because of the small sample size). On the other hand, if the set is defined too broadly, it might include universities that are not at the required research level and thereby distort the rankings. Therefore, author-based measures can be used to identify premier journals, but not for ranking non-premier journals. Behavior-based approaches examine the actual publishing behaviors of tenured researchers at an independently determined set of prominent research universities. This approach assumes that these particular faculty members tend to publish their works in outlets which they regard as of high quality in the field under study. The behavior of these researchers can be trusted because they have demonstrated a level of research excellence which is recognized by their peers (who have participated in their tenure and promotion committees). Holsapple (2008) has developed the publication power approach (PPA) for identifying the premier journals in a specific domain. The PPA of a journal is determined by how many prominent researchers decide to publish their research results in that journal and at what frequency. Table 1 summarizes the various approaches to ranking journals and specifies the advantages and limitations of each approach. As can be seen from Table 1, and as has been indicated by Holsapple and Lee-Post (2010), the recently developed PPA sidesteps the limitations of the other three approaches. For example, various AI researchers have noted that according to the TR two-year impact factor for 2010, the Journal of Machine Learning Research was ranked much higher than Machine Learning (rank 9 vs. rank 31), while according to the expert survey, the order should be reversed (Serenko & Dohan, 2011). 
This discrepancy can be explained by the limitations of citation-based approaches, which tend to prefer open-access journals over other journals. As will be seen later, PPA does not suffer from this limitation and obtains the correct order. Thus, PPA can potentially provide rankings from a different perspective. In particular, PPA provides secondary evidence for highly accepted approaches (expert surveys and citations) and indirect indications for objectively measuring journal quality. Several rankings of AI journals are available in the literature (Cheng et al., 1996). Serenko (2010) compared different citation-based methods for ranking AI journals, while Serenko and Dohan (2011) reported on expert surveys in this field. However, there have been no reports to date on author-based rankings in AI. Therefore, the goal of this paper is to apply PPA to the AI field and to compare its results to existing rankings based on citations and expert surveys.

Methods

1. 108 peer-reviewed AI journals were identified based on the sub-category "Computer Sciences – Artificial Intelligence" as indexed by the Thomson-Reuters Web of Knowledge (WoK). The bibliographic data used in this paper were extracted from the WoK. These data refer to all journal publications of the benchmark scholars.

2. 199 active AI scholars were selected as follows. Instead of selecting tenured AI faculty members at a set of benchmark institutions, as proposed by Holsapple (2008), the recipients of the Association for the Advancement of Artificial Intelligence (AAAI) Fellowship Award were selected as benchmark scholars. This provided a degree of flexibility because the list contains researchers with various affiliations. The AAAI Fellowship Award recognizes a small percentage of AAAI researchers who have made significant, sustained contributions to the field of artificial intelligence. The award has become very selective since 1995.
Between 1990 and 1994, 147 researchers won the award; from 1995 to 2011, only 106 researchers gained this coveted prize. The list of current AAAI fellows contains 199 active scholars (http://www.aaai.org/Awards/fellowscurrent.php).

3. TR records were used to extract the bibliographic data from all papers (6,738 papers in total) that were written by recipients of the AAAI Fellowship Award (http://www.aaai.org/Awards/fellows.php) from 1995 to 2010 inclusive. Note that when the PPA was originally applied to the field of information systems, a slightly longer period of time, a quarter century, was used. However, because the benchmark list used here contains more scholars and because of the rapidly evolving nature of the AI domain, a shorter period of time was chosen for this research.

4. To address the issue of name ambiguity, the "Author Finder" feature in the WoK was used to select authors according to their affiliations and publication category. Note that because certain benchmark researchers changed their affiliation over time, their resumes had to be used to identify this situation and to include all their affiliations.

5. Each journal was analyzed in terms of both the number of prominent researchers who publish manuscripts in the journal (publishing breadth) and the frequency with which they publish (publishing intensity). A journal's publishing breadth is the number of prominent researchers who have authored at least one article in the journal. A journal's publishing intensity is the total number of times the journal has acted as a publication outlet for prominent researchers.

6. Finally, the publication power of a journal is defined as the product of its publishing intensity and its publishing breadth (Holsapple & O'Leary, 2009; Holsapple & Lee-Post, 2010).

Results and Discussion

Table 2 shows the ranking of a number of AI journals. Note that the PPA was capable of ranking only 78 out of 108 journals in the WoK category of AI.
This can be explained by the fact that top-rated researchers seldom publish in non-prestige journals. Therefore, some journals had a power of zero and were not included in the analysis. Four of the journals had a publication power of 10,000 or higher; this power level is equivalent to 100 benchmark researchers collectively having authored 100 articles in the journal. Table 2 also specifies the TR impact factor for 2010 and the expert survey score reported by Serenko and Dohan (2011) based on 873 experts. A natural way to test the validity of the proposed method is to compare it with peer review. Several researchers argue that a bibliometric-based journal ranking procedure that correlates positively with expert surveys of journal quality should be preferred (see, for example, McAllister et al., 1989; Hodge & Lacasse, 2011; Harnad, 2008); others do not agree with this claim. In either case, highly correlated measures have a better chance of being accepted by the community. Table 3 shows Spearman rank correlations for the ranks obtained using all scores presented in Table 2. It was found that the PPA was only weakly correlated with the TR impact factors (rho=0.192 in the case of the five-year impact factor). PPA, TR two-year, and TR five-year impact factors all have high levels of correlation with the expert survey rankings (rho=0.498, 0.514, and 0.564, respectively). Note that the level of correlation found between the TR two-year impact factor and the expert survey ranking is consistent with previous findings (rho=0.508 as reported by Serenko and Dohan, 2011).

Table 1: Various approaches for ranking journals.

Citation-based (including impact factor)

Advantages:
• Objective.
• Highly accepted (Lowry et al., 2007).
• Can compare journals across different disciplines (Lowry et al., 2007).

Limitations:
• It is highly disputed whether the impact factor endorses the quality of all articles (Seglen, 1997; Lowry et al., 2007).
• Long tail: a fortuitous publication of one seminal work can skew the entire results for a given journal (Calver & Bradley, 2009).
• Ignores the semantics of references (Holsapple, 2008) by simply assuming that every citation in an article's reference list is equally important.
• Self-citations (Rousseau, 1999).
• Not useful for ranking small fields in which only a few journals appear in journal ranking indexes (Seglen, 2006).
• Not useful for ranking niche journals which are read and cited by a small community of researchers (Serenko & Dohan, 2011).
• Biased towards open and online journals, which are not constrained by physical print limitations (Antelman, 2004).
• Biased towards journals that have been longer in print (Serenko & Dohan, 2011).
• Citation habits can vary greatly by discipline and country, with non-English-speaking academics being cited far less often (Seglen, 2006).
• Citations can be manipulated through editorial practices such as requiring accepted authors to cite more articles previously published in the specific journal (Sevinc, 2004).
• Review articles can inflate citation numbers (Seglen, 2006).
• Journal databases may contain errors, resulting in incorrectly reported journal impact indices (Elkins et al., 2010).
• Journal rankings can differ depending on how the citation counts are analyzed (total, age-adjustment, etc.), which can lead to confusion (Holsapple & Lee-Post, 2010).
• Cannot be used for ranking new outlets.

Expert survey

Advantages:
• Highly accepted (Lowry et al., 2007).
• "Journal's ranking position reflects a cumulative opinion of a representative group of its readers and contributors" (Serenko & Dohan, 2011, page 630).
• Allows rankings to be produced for underrepresented niche research areas (Seglen, 2006).
• Allows rankings by various demographics (Lowry et al., 2007).

Limitations:
• Subjective.
• Difficulties in obtaining a sufficiently large and representative sample (Gorman & Kanet, 2005; Saha et al., 2003).
• Sensitive to various factors, including different time periods, respondents' research fields, different sets and numbers of anchor journals, and ranking criteria (Olson, 2005).
• Less effective when large, predefined lists are used (Lowry et al., 2007).
• Vague rating criteria may not be interpreted uniformly by all respondents (Holsapple, 2008).
• Biased in various ways (Holsapple, 2008).
• It takes a long time for most respondents to change their opinion about a journal's quality (Tahai & Meyer, 1999), which produces inflexible ranking lists.
• Affected by intra-institutional politics (Adler & Harzing, 2009), because some scholars may prefer the outlets appearing in their internal ranking lists.
• Exposure effect: participants in journal ranking surveys may prefer certain journals merely because they are more familiar with them (Serenko & Bontis, 2011); therefore, newer and more specialized journals are ignored (Gallivan & Benbunan-Fich, 2007).
• Path dependency: many expert surveys are based on previous rankings, making it relatively more difficult for newer or niche journals to break into the rankings (Truex et al., 2009).

AAI (Author Affiliation Index)

Advantages:
• Objective.
• Robust with respect to changes in input, such as the number of top universities taken into account (Gorman & Kanet, 2005).
• Easy to use (Cronin & Meho, 2008).
• Stable over time (Gorman & Kanet, 2005).
• Can provide peer groups of journals of equivalent quality (Gorman & Kanet, 2005).

Limitations:
• The precise size of the university set is unclear: if it is too small, the results will be biased; if it is too large, it will be difficult to differentiate among the journals (Holsapple & Lee-Post, 2010).
• Working only with prominent universities can be misleading; some outstanding researchers may choose to work at an institution of modest ranking (Cronin & Meho, 2008).
• While journals' decisions should be indifferent to author affiliation, in practice institutional affiliation can sometimes influence publication decisions (Cronin & Meho, 2008).
• Not useful for ranking "loosely structured and less clearly delineated fields such as Library and Information Science" (Cronin & Meho, 2008, page 1864).
• The resultant journal rankings are limited to the particular journals for which the AAI is calculated. A journal can be only partially relevant to the examined field and still be highly ranked because many of those who publish in it are faculty members at prominent universities (Holsapple & Lee-Post, 2010).
• Defining the set of prominent universities is partially based on the publications their faculty have in a preselected set of high-quality journals, which creates a circular effect that biases the results (Holsapple & Lee-Post, 2010).

PPA (Publication Power Approach)

Advantages:
• Objective.
• Provides a multi-dimensional metric of journal importance (Holsapple & Lee-Post, 2010).
• Allows the establishment of national or regional journal rankings (Serenko & Jiao, 2011).

Limitations:
• Sensitive to the size and composition of the benchmark set; thus, the benchmark set should be carefully selected (Holsapple, 2008).
• Regardless of the benchmarks used, certain outstanding journals from reference disciplines or specialty niches can be excluded (Holsapple, 2008).
• Does not address cases of multi-authored articles by the benchmark scholar set (Holsapple, 2008).
• In its original form, it does not consider the number of papers (or pages or words) published annually in the various journals (Holsapple & O'Leary, 2009).
• Journal characteristics, such as acceptance rate or review time, may unduly influence researchers' behavior (Holsapple & Lee-Post, 2010).
• Existing journal rankings may unduly influence researchers' behavior (Holsapple, 2008).
• It is sensitive to the time window used (Holsapple & Lee-Post, 2010).
• As changes happen over time (new researchers become tenured while others retire), the benchmark set is not stable over time, and thus the ranking should be expected to vary over time as well (Holsapple & Lee-Post, 2010).

Table 2: AI journals ranked according to the publication power.
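As an illustrative sketch, the publication-power computation described in the Methods (breadth × intensity) and the Spearman rank comparison and rank averaging reported in the Results can be expressed as follows. The scholar names, journal names, and the impact-factor ranking below are invented toy data, not the paper's actual records:

```python
from collections import defaultdict

# Toy publication records: (benchmark scholar, journal). Illustrative only.
records = [
    ("A", "J1"), ("A", "J1"), ("B", "J1"), ("C", "J2"),
    ("A", "J2"), ("B", "J3"), ("B", "J3"), ("C", "J3"), ("C", "J3"),
]

# Publishing breadth: distinct benchmark scholars with >= 1 article in the
# journal. Publishing intensity: total articles they placed in the journal.
authors = defaultdict(set)
counts = defaultdict(int)
for scholar, journal in records:
    authors[journal].add(scholar)
    counts[journal] += 1

# Publication power = breadth * intensity (Holsapple & Lee-Post, 2010).
power = {j: len(authors[j]) * counts[j] for j in counts}

def ranks(scores):
    """Map item -> rank (1 = best); higher score is better; no tie handling."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {item: i + 1 for i, item in enumerate(ordered)}

def spearman(r1, r2):
    """Spearman rho from two rank dicts over the same items (no ties)."""
    n = len(r1)
    d2 = sum((r1[k] - r2[k]) ** 2 for k in r1)
    return 1 - 6 * d2 / (n * (n * n - 1))

power_rank = ranks(power)          # {"J3": 1, "J1": 2, "J2": 3}
tr_rank = {"J1": 1, "J2": 3, "J3": 2}  # hypothetical impact-factor ranking
rho = spearman(power_rank, tr_rank)    # 0.5 for this toy data

# The paper's combined ranking: simple average of the TR and power ranks,
# then re-ranked (lower average is better, hence the negation).
avg = {j: (power_rank[j] + tr_rank[j]) / 2 for j in power_rank}
combined_rank = ranks({j: -s for j, s in avg.items()})
print(power, power_rank, rho, combined_rank)
```

A production version would need the tie-corrected Spearman formula and a rule for breaking equal average ranks, which this sketch omits for brevity.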


Journal title:
  • JASIST

Volume 63, Issue -

Pages -

Publication date: 2012